
Applies to SUSE Linux Enterprise Server 15 SP2

14 Configuring Virtual Machines with virsh # 

You can use virsh to configure virtual machines (VMs) on the command line as an alternative to using the Virtual Machine Manager. With virsh, you can control the state of a VM, edit its configuration, or even migrate a VM to another host. The following sections describe how to manage VMs by using virsh.

14.1 Editing the VM Configuration # 

The configuration of a VM is stored in an XML file in /etc/libvirt/qemu/ and looks like this:

Example 14.1: Example XML Configuration File # 

<domain type='kvm'>
  <name>sles15</name>
  <uuid>ab953e2f-9d16-4955-bb43-1178230ee625</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type>
  </os>
  ...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Skylake-Client-IBRS</model>
  </cpu>
  ...
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    ...
  </devices>
  ...
</domain>

If you want to edit the configuration of a VM Guest, check if it is offline:

> sudo virsh list --inactive

If your VM Guest is in this list, you can safely edit its configuration:

> sudo virsh edit NAME_OF_VM_GUEST

Before saving the changes, virsh validates your input against a RelaxNG schema.

14.2 Changing the Machine Type # 

When installing with the virt-install tool, the machine type for a VM Guest is pc-i440fx by default. The machine type is stored in the VM Guest's configuration file in the type element:

<type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type>

As an example, the following procedure shows how to change this value to the machine type q35. The value q35 is an Intel* chipset and includes PCIe, supports up to 12 USB ports, and has support for SATA and IOMMU.

Procedure 14.1: Changing Machine Type # 

Check whether your VM Guest is inactive:

> sudo virsh list --inactive
 Id    Name      State
----------------------------------------------------
 -     sles15    shut off

Edit the configuration for this VM Guest:

> sudo virsh edit sles15

Replace the value of the machine attribute with pc-q35-2.0 :

<type arch='x86_64' machine='pc-q35-2.0'>hvm</type>

Restart the VM Guest:

> sudo virsh start sles15

Check if the machine type has changed. Log in to the VM Guest and run the following command:

> sudo dmidecode | grep Product
Product Name: Standard PC (Q35 + ICH9, 2009)

Tip: Machine Type Update Recommendations

Whenever the QEMU version on the host system is upgraded (for example, when upgrading the VM Host Server to a new service pack), upgrade the machine type of the VM Guests to the latest available version. To check, use the command qemu-system-x86_64 -M help on the VM Host Server.

The default machine type pc-i440fx, for example, is regularly updated. If your VM Guest still runs with a machine type of pc-i440fx-1.X, we strongly recommend an update to pc-i440fx-2.X. This allows taking advantage of the most recent updates and corrections in machine definitions, and ensures better future compatibility.

14.3 Configuring Hypervisor Features # 

libvirt automatically enables a default set of hypervisor features that are sufficient in most circumstances, but also allows enabling and disabling features as needed. As an example, Xen does not support enabling PCI passthrough by default. It must be enabled with the passthrough setting. Hypervisor features can be configured with virsh. Look for the <features> element in the VM Guest's configuration file and adjust the various features as required. Continuing with the Xen passthrough example:

> sudo virsh edit sle15sp1
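The passthrough feature is then declared inside the features element. A minimal sketch (attribute details can vary with the libvirt version):

<features>
  <xen>
    <passthrough/>
  </xen>
</features>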

Save your changes and restart the VM Guest.

See the Hypervisor features section of the libvirt Domain XML format manual at https://libvirt.org/formatdomain.html#elementsFeatures for more information.

14.4 Configuring CPU Allocation # 

The number of allocated CPUs is stored in the VM Guest's XML configuration file in /etc/libvirt/qemu/ in the vcpu element:

<vcpu placement='static'>1</vcpu>

In this example, the VM Guest has only one allocated CPU. The following procedure shows how to change the number of allocated CPUs for the VM Guest:

Check whether your VM Guest is inactive:

> sudo virsh list --inactive
 Id    Name      State
----------------------------------------------------
 -     sles15    shut off

Edit the configuration for an existing VM Guest:

> sudo virsh edit sles15

Change the number of allocated CPUs:

<vcpu placement='static'>2</vcpu>

Restart the VM Guest:

> sudo virsh start sles15

Check if the number of CPUs in the VM has changed.

> sudo virsh vcpuinfo sles15
VCPU:           0
CPU:            N/A
State:          N/A
CPU time:       N/A
CPU Affinity:   yy

VCPU:           1
CPU:            N/A
State:          N/A
CPU time:       N/A
CPU Affinity:   yy

14.5 Changing Boot Options # 

The boot menu of the VM Guest can be found in the os element and usually looks like this:

<os>
  <type>hvm</type>
  <loader readonly='yes' secure='no' type='rom'>/usr/lib/xen/boot/hvmloader</loader>
  <nvram>/var/lib/libvirt/nvram/guest_VARS.fd</nvram>
  <boot dev='cdrom'/>
  <boot dev='hd'/>
</os>

In this example, two devices are available, hd and cdrom . The configuration also reflects the actual boot order, so the cdrom comes before the hd .

14.5.1 Changing Boot Order # 

The VM Guest's boot order is represented through the order of devices in the XML configuration file. As the devices are interchangeable, it is possible to change the boot order of the VM Guest.

Open the VM Guest's XML configuration.

> sudo virsh edit sles15

Change the sequence of the bootable devices.

<os>
  ...
  <boot dev='hd'/>
  <boot dev='cdrom'/>
  ...
</os>

Check if the boot order was changed successfully by looking at the boot menu in the BIOS of the VM Guest.

14.5.2 Using Direct Kernel Boot # 

Direct Kernel Boot allows you to boot from a kernel and initrd stored on the host. Set the path to both files in the kernel and initrd elements:

<os>
  ...
  <kernel>/root/f8-i386-vmlinuz</kernel>
  <initrd>/root/f8-i386-initrd</initrd>
  ...
</os>

To enable Direct Kernel Boot:

Open the VM Guest's XML configuration:

> sudo virsh edit sles15

Inside the os element, add a kernel element and the path to the kernel file on the host:

<os>
  ...
  <kernel>/root/f8-i386-vmlinuz</kernel>
  ...
</os>

Add an initrd element and the path to the initrd file on the host:

<os>
  ...
  <initrd>/root/f8-i386-initrd</initrd>
  ...
</os>

Start your VM to boot from the new kernel:

> sudo virsh start sles15

14.6 Configuring Memory Allocation # 

The amount of memory allocated for the VM Guest can also be configured with virsh. It is stored in the memory element. Follow these steps:

Open the VM Guest's XML configuration:

> sudo virsh edit sles15

Search for the memory element and set the amount of allocated RAM:

...
<memory unit='KiB'>524288</memory>
...

Check the amount of allocated RAM in your VM by running:

> cat /proc/meminfo

14.7 Adding a PCI Device # 

To assign a PCI device to a VM Guest with virsh, follow these steps:

Identify the host PCI device to assign to the VM Guest. In the following example, we are assigning a DEC network card to the guest:

> sudo lspci -nn
[...]
03:07.0 Ethernet controller [0200]: Digital Equipment Corporation DECchip \
21140 [FasterNet] [1011:0009] (rev 22)
[...]

Write down the device ID (03:07.0 in this case).

Gather detailed information about the device using virsh nodedev-dumpxml ID. To get the ID, replace the colon and the period in the device ID (03:07.0) with underscores. Prefix the result with “pci_0000_”: pci_0000_03_07_0.

> sudo virsh nodedev-dumpxml pci_0000_03_07_0
<device>
  <name>pci_0000_03_07_0</name>
  <path>/sys/devices/pci0000:00/0000:00:14.4/0000:03:07.0</path>
  <parent>pci_0000_00_14_4</parent>
  <driver>
    <name>tulip</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>3</bus>
    <slot>7</slot>
    <function>0</function>
    <product id='0x0009'>DECchip 21140 [FasterNet]</product>
    <vendor id='0x1011'>Digital Equipment Corporation</vendor>
  </capability>
</device>

Write down the values for domain, bus, slot, and function (shown in the domain, bus, slot, and function elements of the previous XML output).

Detach the device from the host system prior to attaching it to the VM Guest:

> sudo virsh nodedev-detach pci_0000_03_07_0
Device pci_0000_03_07_0 detached

Tip: Multi-Function PCI Devices

When using a multi-function PCI device that does not support FLR (function level reset) or PM (power management) reset, you need to detach all its functions from the VM Host Server. The whole device must be reset for security reasons. libvirt will refuse to assign the device if one of its functions is still in use by the VM Host Server or another VM Guest.

Convert the domain, bus, slot, and function value from decimal to hexadecimal. In our example, domain = 0, bus = 3, slot = 7, and function = 0. Ensure that the values are inserted in the right order:

> printf "<address domain='0x%x' bus='0x%x' slot='0x%x' function='0x%x'/>\n" 0 3 7 0

This results in:

<address domain='0x0' bus='0x3' slot='0x7' function='0x0'/>

Run virsh edit on your domain, and add the following device entry in the <devices> section using the result from the previous step:
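<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0' bus='0x3' slot='0x7' function='0x0'/>
  </source>
</hostdev>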

Tip: managed Compared to unmanaged

libvirt recognizes two modes for handling PCI devices: they can be either managed or unmanaged. In the managed case, libvirt handles all details of unbinding the device from the existing driver if needed, resetting the device, binding it to vfio-pci before starting the domain, etc. When the domain is terminated or the device is removed from the domain, libvirt unbinds from vfio-pci and rebinds to the original driver in the case of a managed device. If the device is unmanaged, the user must ensure all of these management aspects of the device are done before assigning it to a domain, and after the device is no longer used by the domain.

In the example above, the managed='yes' option means that the device is managed. To switch the device mode to unmanaged, set managed='no' in the listing above. If you do so, you need to take care of the related driver with the virsh nodedev-detach and virsh nodedev-reattach commands. Prior to starting the VM Guest you need to detach the device from the host by running virsh nodedev-detach pci_0000_03_07_0. In case the VM Guest is not running, you can make the device available for the host by running virsh nodedev-reattach pci_0000_03_07_0.

Shut down the VM Guest and disable SELinux if it is running on the host.

> sudo setsebool -P virt_use_sysfs 1

Start your VM Guest to make the assigned PCI device available:

> sudo virsh start sles15

Important: SLES11 SP4 KVM guests

On a newer QEMU machine type (pc-i440fx-2.0 or higher) with SLES11 SP4 KVM guests, the acpiphp module is not loaded by default in the guest. This module must be loaded to enable hotplugging of disk and network devices. To load the module manually, use the command modprobe acpiphp. It is also possible to autoload the module by adding install acpiphp /bin/true to the /etc/modprobe.conf.local file.

Important: KVM guests using QEMU Q35 machine type

KVM guests using the QEMU Q35 machine type have a PCI topology that includes a pcie-root controller and seven pcie-root-port controllers. The pcie-root controller does not support hotplug. Each pcie-root-port controller supports hotplugging a single PCIe device. PCI controllers cannot be hotplugged, so plan accordingly and add more pcie-root-ports if more than seven PCIe devices will be hotplugged. A pcie-to-pci-bridge controller can be added to support hotplugging legacy PCI devices. See https://libvirt.org/pci-hotplug.html for more information about PCI topology between QEMU machine types.

14.7.1 PCI Pass-Through for IBM Z # 

In order to support IBM Z, QEMU extended the PCI representation by allowing the configuration of extra attributes. Two more attributes—uid and fid—were added to the libvirt specification. uid represents a user-defined identifier, while fid represents a PCI function identifier. These attributes are optional; if you do not specify them, they are automatically generated with non-conflicting values.

To include the zPCI attributes in your domain specification, use a definition like the following example (the address, uid, and fid values shown are illustrative):
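<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
  </source>
  <address type='pci'>
    <zpci uid='0x0001' fid='0x00000000'/>
  </address>
</hostdev>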

14.8 Adding a USB Device # 

To assign a USB device to a VM Guest using virsh, follow these steps:

Identify the host USB device to assign to the VM Guest:

> sudo lsusb
[...]
Bus 001 Device 003: ID 0557:2221 ATEN International Co., Ltd Winbond Hermon
[...]

Write down the vendor and product IDs. In our example, the vendor ID is 0557 and the product ID is 2221.

Run virsh edit on your domain, and add the following device entry in the <devices> section using the values from the previous step:
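<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x0557'/>
    <product id='0x2221'/>
  </source>
</hostdev>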

Tip: Vendor/Product or Device's Address

Instead of defining the host device with vendor and product IDs, you can use the <address> element as described for host PCI devices in Section 14.7, “Adding a PCI Device”.

Shut down the VM Guest and disable SELinux if it is running on the host:

> sudo setsebool -P virt_use_sysfs 1

Start your VM Guest to make the assigned USB device available:

> sudo virsh start sles15

14.9 Adding SR-IOV Devices # 

Single Root I/O Virtualization (SR-IOV) capable PCIe devices can replicate their resources, so they appear to be multiple devices. Each of these “pseudo-devices” can be assigned to a VM Guest.

SR-IOV is an industry specification that was created by the Peripheral Component Interconnect Special Interest Group (PCI-SIG) consortium. It introduces physical functions (PF) and virtual functions (VF). PFs are full PCIe functions used to manage and configure the device. PFs can also move data. VFs lack the configuration and management part—they can only move data, and they offer a reduced set of configuration functions. As VFs do not have all PCIe functions, the host operating system or the hypervisor must support SR-IOV to be able to access and initialize VFs. The theoretical maximum for VFs is 256 per device (consequently the maximum for a dual-port Ethernet card would be 512). In practice this maximum is much lower, since each VF consumes resources.

14.9.1 Requirements # 

The following requirements must be met to use SR-IOV:

An SR-IOV-capable network card (as of SUSE Linux Enterprise Server 15, only network cards support SR-IOV)

An AMD64/Intel 64 host supporting hardware virtualization (AMD-V or Intel VT-x), see Section 6.3, “KVM Hardware Requirements” for more information

A chipset that supports device assignment (AMD-Vi or Intel VT-d)

libvirt 0.9.10 or better

SR-IOV drivers must be loaded and configured on the host system

A host configuration that meets the requirements listed at Important: Requirements for VFIO and SR-IOV

A list of the PCI addresses of the VF(s) that will be assigned to VM Guests

Tip: Checking if a Device is SR-IOV-Capable

The information whether a device is SR-IOV-capable can be obtained from its PCI descriptor by running lspci. A device that supports SR-IOV reports a capability similar to the following:

Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)

Note: Adding an SR-IOV Device at VM Guest Creation

Before adding an SR-IOV device to a VM Guest when initially setting it up, the VM Host Server already needs to be configured as described in Section 14.9.2, “Loading and Configuring the SR-IOV Host Drivers”.

14.9.2 Loading and Configuring the SR-IOV Host Drivers # 

To access and initialize VFs, an SR-IOV-capable driver needs to be loaded on the host system.

Before loading the driver, make sure the card is properly detected by running lspci. The following example shows the lspci output for the dual-port Intel 82576NS network card:

> sudo /sbin/lspci | grep 82576
01:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
04:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)

In case the card is not detected, it is likely that the hardware virtualization support in the BIOS/EFI has not been enabled. To check if hardware virtualization support is enabled, look at the settings in the host's BIOS.

Check whether the SR-IOV driver is already loaded by running lsmod. In the following example a check for the igb driver (for the Intel 82576NS network card) returns a result. That means the driver is already loaded. If the command returns nothing, the driver is not loaded.

> sudo /sbin/lsmod | egrep "^igb "
igb                   185649  0

Skip the following step if the driver is already loaded. If the SR-IOV driver is not yet loaded, the non-SR-IOV driver needs to be removed first, before loading the new driver. Use rmmod to unload a driver. The following example unloads the non-SR-IOV driver for the Intel 82576NS network card:

> sudo /sbin/rmmod igbvf

Load the SR-IOV driver subsequently using the modprobe command—the VF parameter (max_vfs) is mandatory:

> sudo /sbin/modprobe igb max_vfs=8

As an alternative, you can also load the driver via SYSFS:

Find the PCI ID of the physical NIC by listing Ethernet devices:

> sudo lspci | grep Eth
06:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
06:00.1 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)

To enable VFs, echo the number of desired VFs to load to the sriov_numvfs parameter:

> echo 1 | sudo tee /sys/bus/pci/devices/0000:06:00.1/sriov_numvfs

Verify that the VF NIC was loaded:

> sudo lspci | grep Eth
06:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
06:00.1 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
06:08.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)

Obtain the maximum number of VFs available:

> sudo lspci -vvv -s 06:00.1 | grep 'Initial VFs'
Initial VFs: 32, Total VFs: 32, Number of VFs: 0, Function Dependency Link: 01

Create a /etc/systemd/system/before.service file which loads VF via SYSFS on boot:

[Unit]
Before=
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/bash -c "echo 1 > /sys/bus/pci/devices/0000:06:00.1/sriov_numvfs"
# beware, executable is run directly, not through a shell, check the man pages
# systemd.service and systemd.unit for full syntax
[Install]
# target in which to start the service
WantedBy=multi-user.target
#WantedBy=graphical.target

Prior to starting the VM, you need to create another service file (after-local.service) pointing to the /etc/init.d/after.local script that detaches the NIC. Otherwise, the VM would fail to start:

[Unit]
Description=/etc/init.d/after.local Compatibility
After=libvirtd.service
Requires=libvirtd.service
[Service]
Type=oneshot
ExecStart=/etc/init.d/after.local
RemainAfterExit=true
[Install]
WantedBy=multi-user.target

Copy it to /etc/systemd/system.

#! /bin/sh
# ...
virsh nodedev-detach pci_0000_06_08_0

Save it as /etc/init.d/after.local.

Reboot the machine and check if the SR-IOV driver is loaded by re-running the lspci command from the first step of this procedure. If the SR-IOV driver was loaded successfully you should see additional lines for the VFs:

01:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
01:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
01:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
01:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
[...]
04:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
04:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
04:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
04:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
[...]

14.9.3 Adding a VF Network Device to a VM Guest # 

When the SR-IOV hardware is properly set up on the VM Host Server, you can add VFs to VM Guests. To do so, you need to collect some data first.

Procedure 14.2: Adding a VF Network Device to an Existing VM Guest # 

The following procedure uses example data. Make sure to replace it by appropriate data from your setup.

Use the virsh nodedev-list command to get the PCI address of the VF you want to assign and its corresponding PF. Numerical values from the lspci output shown in Section 14.9.2, “Loading and Configuring the SR-IOV Host Drivers” (for example 01:00.0 or 04:00.1) are transformed by adding the prefix "pci_0000_" and by replacing colons and dots with underscores. So a PCI ID listed as "04:00.0" by lspci is listed as "pci_0000_04_00_0" by virsh. The following example lists the PCI IDs for the second port of the Intel 82576NS network card:

> sudo virsh nodedev-list | grep 0000_04_
pci_0000_04_00_0
pci_0000_04_00_1
pci_0000_04_10_0
pci_0000_04_10_1
pci_0000_04_10_2
pci_0000_04_10_3
pci_0000_04_10_4
pci_0000_04_10_5
pci_0000_04_10_6
pci_0000_04_10_7
pci_0000_04_11_0
pci_0000_04_11_1
pci_0000_04_11_2
pci_0000_04_11_3
pci_0000_04_11_4
pci_0000_04_11_5

The first two entries represent the PFs, whereas the other entries represent the VFs.

Run the following virsh nodedev-dumpxml command on the PCI ID of the VF you want to add:

> sudo virsh nodedev-dumpxml pci_0000_04_10_0
<device>
  <name>pci_0000_04_10_0</name>
  <parent>pci_0000_00_02_0</parent>
  <capability type='pci'>
    <domain>0</domain>
    <bus>4</bus>
    <slot>16</slot>
    <function>0</function>
    <product id='0x10ca'>82576 Virtual Function</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
  </capability>
</device>

The following data is needed for the next step:

<domain>0</domain>
<bus>4</bus>
<slot>16</slot>
<function>0</function>

Create a temporary XML file (for example, /tmp/vf-interface.xml) containing the data necessary to add a VF network device to an existing VM Guest. The minimal content of the file needs to look like the following:

<interface type='hostdev'> 1
  <source>
    <address type='pci' domain='0' bus='4' slot='16' function='0'/> 2
  </source>
</interface>

1 VFs do not get a fixed MAC address; it changes every time the host reboots. When adding network devices the “traditional” way with hostdev, it would be necessary to reconfigure the VM Guest's network device after each reboot of the host because of the MAC address change. To avoid this kind of problem, libvirt introduced the interface type hostdev, which sets up network-specific data before assigning the device.

2 Specify the data you acquired in the previous step here.

In case a device is already attached to the host, it cannot be attached to a VM Guest. To make it available for guests, detach it from the host first:

> sudo virsh nodedev-detach pci_0000_04_10_0

Add the VF interface to an existing VM Guest:

> sudo virsh attach-device GUEST /tmp/vf-interface.xml --OPTION

GUEST needs to be replaced by the domain name, ID or UUID of the VM Guest. --OPTION can be one of the following:

--persistent

This option will always add the device to the domain's persistent XML. In addition, if the domain is running, it will be hotplugged.

--config

This option will only affect the persistent XML, even if the domain is running. The device will only show up in the VM Guest on next boot.

--live

This option will only affect a running domain. If the domain is inactive, the operation will fail. The device is not persisted in the XML and will not be available in the VM Guest on next boot.

--current

This option affects the current state of the domain. If the domain is inactive, the device is added to the persistent XML and will be available on next boot. If the domain is active, the device is hotplugged but not added to the persistent XML.

To detach a VF interface, use the virsh detach-device command, which also takes the options listed above.

14.9.4 Dynamic Allocation of VFs from a Pool # 

If you define the PCI address of a VF statically in a VM Guest's configuration as described in Section 14.9.3, “Adding a VF Network Device to a VM Guest”, it is hard to migrate such a guest to another host. The host must have identical hardware in the same location on the PCI bus, or the VM Guest configuration must be modified prior to each start.

Another approach is to create a libvirt network with a device pool that contains all the VFs of an SR-IOV device. The VM Guest then references this network, and each time it is started, a single VF is dynamically allocated to it. When the VM Guest is stopped, the VF is returned to the pool, available for another guest.

14.9.4.1 Defining Network with Pool of VFs on VM Host Server # 

The following example of network definition creates a pool of all VFs for the SR-IOV device with its physical function (PF) at the network interface eth0 on the host:

<network>
  <name>passthrough</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth0'/>
  </forward>
</network>

To use this network on the host, save the above code to a file, for example /tmp/passthrough.xml, and execute the following commands. Remember to replace eth0 with the real network interface name of your SR-IOV device's PF:

> sudo virsh net-define /tmp/passthrough.xml
> sudo virsh net-autostart passthrough
> sudo virsh net-start passthrough

14.9.4.2 Configuring VM Guests to Use VF from the Pool # 

The following example of VM Guest device interface definition uses a VF of the SR-IOV device from the pool created in Section 14.9.4.1, “Defining Network with Pool of VFs on VM Host Server”. libvirt automatically derives the list of all VFs associated with that PF the first time the guest is started.
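A minimal interface definition referencing that network looks like the following sketch:

<interface type='network'>
  <source network='passthrough'/>
</interface>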

After the first VM Guest starts that uses the network with the pool of VFs, verify the list of associated VFs. Do so by running virsh net-dumpxml passthrough on the host.

<network connections='1'>
  <name>passthrough</name>
  <uuid>a6a26429-d483-d4ed-3465-4436ac786437</uuid>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth0'/>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x1'/>
    [...]
  </forward>
</network>

14.10 Listing Attached Devices # 

Although there is no mechanism in virsh to list all VM Host Server's devices that have already been attached to its VM Guests, you can list all devices attached to a specific VM Guest by running the following command:

virsh dumpxml VMGUEST_NAME | xpath -e /domain/devices/hostdev

For example:

> sudo virsh dumpxml sles12 | xpath -e /domain/devices/hostdev
Found 2 nodes:
-- NODE --
...
-- NODE --
...

Tip: Listing SR-IOV Devices Attached via <interface type='hostdev'>

For SR-IOV devices that are attached to the VM Host Server by means of <interface type='hostdev'>, you need to use a different XPath query:

virsh dumpxml VMGUEST_NAME | xpath -e /domain/devices/interface/@type

14.11 Configuring Storage Devices # 

Storage devices are defined within the disk element. The disk element supports several attributes. The following two attributes are the most important:

The type attribute describes the source of the virtual disk device. Valid values are file, block, dir, network, or volume.

The device attribute indicates how the disk is exposed to the VM Guest OS. As an example, possible values can include floppy, disk, cdrom, and others.

The following child elements are the most important:

driver contains the driver and the bus. These are used by the VM Guest to work with the new disk device.

The target element contains the device name under which the new disk is shown in the VM Guest. It also contains the optional bus attribute, which defines the type of bus on which the new disk should operate.

The following procedure shows how to add storage devices to the VM Guest:

Edit the configuration for an existing VM Guest:

> sudo virsh edit sles15

Add a disk element inside the devices element together with the attributes type and device:
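<disk type='file' device='disk'>
</disk>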

Specify a driver element and use the default values:
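<driver name='qemu' type='qcow2'/>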

Create a disk image, which will be used as a source for the new virtual disk device:

> sudo qemu-img create -f qcow2 /var/lib/libvirt/images/sles15.qcow2 32G

Add the path for the disk source:
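<source file='/var/lib/libvirt/images/sles15.qcow2'/>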

Define the target device name in the VM Guest and the bus on which the disk should work, for example a virtio disk named vda:
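<target dev='vda' bus='virtio'/>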

Restart your VM:

> sudo virsh start sles15

Your new storage device should be available in the VM Guest OS.

14.12 Configuring Controller Devices # 

libvirt generally manages controllers automatically based on the type of virtual devices used by the VM Guest. If the VM Guest contains PCI and SCSI devices, PCI and SCSI controllers will be created and managed automatically. libvirt will also model controllers that are hypervisor-specific, for example, a virtio-serial controller for KVM VM Guests or a xenbus controller for Xen VM Guests. Although the default controllers and their configuration are generally fine, there may be use cases where controllers or their attributes need to be adjusted manually. For example, a virtio-serial controller may need more ports, or a xenbus controller may need more memory or more virtual interrupts.

The xenbus controller is unique in that it serves as the controller for all Xen paravirtual devices. If a VM Guest has many disk and/or network devices, the controller may need more memory. Xen's max_grant_frames attribute sets how many grant frames, or blocks of shared memory, are allocated to the xenbus controller for each VM Guest.

The default of 32 is enough in most circumstances, but a VM Guest with a large number of I/O devices and an I/O-intensive workload may experience performance issues due to grant frame exhaustion. The xen-diag command can be used to check the current and maximum grant frame values for dom0 and your VM Guests. The VM Guests must be running:

> sudo virsh list
 Id   Name       State
--------------------------------
 0    Domain-0   running
 3    sle15sp1   running

> sudo xen-diag gnttab_query_size 0
domid=0: nr_frames=1, max_nr_frames=256

> sudo xen-diag gnttab_query_size 3
domid=3: nr_frames=3, max_nr_frames=32

The sle15sp1 guest is using only three frames out of 32. If you are seeing performance issues, and log entries that point to insufficient frames, increase the value with virsh. Look for the <controller type='xenbus'/> line in the guest's configuration file, and add the maxGrantFrames control attribute:

> sudo virsh edit sle15sp1
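<controller type='xenbus' maxGrantFrames='40'/>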

Save your changes and restart the guest. Now it should show your change:

> sudo xen-diag gnttab_query_size 3 domid=3: nr_frames=3, max_nr_frames=40

Similar to maxGrantFrames, the xenbus controller also supports maxEventChannels. Event channels are like paravirtual interrupts, and in conjunction with grant frames, form a data transfer mechanism for paravirtual drivers. They are also used for inter-processor interrupts. VM Guests with a large number of vCPUs and/or many paravirtual devices may need to increase the maximum default value of 1023. maxEventChannels can be changed similarly to maxGrantFrames:

> sudo virsh edit sle15sp1
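For example, add maxEventChannels to the same controller line (the limit shown is illustrative):

<controller type='xenbus' maxGrantFrames='40' maxEventChannels='2047'/>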

See the Controllers section of the libvirt Domain XML format manual at https://libvirt.org/formatdomain.html#elementsControllers for more information.

14.13 Configuring Video Devices # 

When using the Virtual Machine Manager, only the Video device model can be defined. The amount of allocated VRAM or 2D/3D acceleration can only be changed in the XML configuration.

14.13.1 Changing the Amount of Allocated VRAM # 

Edit the configuration for an existing VM Guest:

> sudo virsh edit sles15

Change the size of the allocated VRAM, for example (the vram value is in KiB; the size shown is illustrative):

<video>
  <model type='vga' vram='65536' heads='1'/>
</video>

Check if the amount of VRAM in the VM has changed by looking at the amount in the Virtual Machine Manager.

14.13.2 Changing the State of 2D/3D Acceleration # 

Edit the configuration for an existing VM Guest:

> sudo virsh edit sles15

To enable/disable 2D/3D acceleration, change the values of accel3d and accel2d accordingly (the other attribute values shown are illustrative):

<video>
  <model type='vbox' vram='65536' heads='1'>
    <acceleration accel3d='yes' accel2d='yes'/>
  </model>
</video>

Tip: Enabling 2D/3D Acceleration

Only vbox video devices are capable of 2D/3D acceleration. You cannot enable it on other video devices.

14.14 Configuring Network Devices # 

This section describes how to configure specific aspects of virtual network devices by using virsh.

Find more details about the libvirt network interface specification at https://libvirt.org/formatdomain.html#elementsDriverBackendOptions.

14.14.1 Scaling Network Performance with Multiqueue virtio-net # 

The multiqueue virtio-net feature scales the network performance by allowing the VM Guest's virtual CPUs to transfer packets in parallel. Refer to Section 30.3.3, “Scaling Network Performance with Multiqueue virtio-net” for more general information.

To enable multiqueue virtio-net for a specific VM Guest, edit its XML configuration as described in Section 14.1, “Editing the VM Configuration” and modify its network interface as follows:

<interface type='network'>
  [...]
  <model type='virtio'/>
  <driver name='vhost' queues='NUMBER_OF_QUEUES'/>
</interface>

14.15 Using Macvtap to Share VM Host Server Network Interfaces # 

Macvtap provides direct attachment of a VM Guest virtual interface to a host network interface. The macvtap-based interface extends the VM Host Server network interface and has its own MAC address on the same Ethernet segment. Typically, this is used to make both the VM Guest and the VM Host Server show up directly on the switch that the VM Host Server is connected to.

Note: Macvtap Cannot Be Used with a Linux Bridge

Macvtap cannot be used with network interfaces already connected to a Linux bridge. Before attempting to create the macvtap interface, remove the interface from the bridge.

Note: VM Guest to VM Host Server Communication with Macvtap

When using macvtap, a VM Guest can communicate with other VM Guests, and with other external hosts on the network. But it cannot communicate with the VM Host Server on which the VM Guest runs. This is the defined behavior of macvtap, because of the way the VM Host Server's physical Ethernet is attached to the macvtap bridge. Traffic from the VM Guest into that bridge that is forwarded to the physical interface cannot be bounced back up to the VM Host Server's IP stack. Similarly, traffic from the VM Host Server's IP stack that is sent to the physical interface cannot be bounced back up to the macvtap bridge for forwarding to the VM Guest.

Virtual network interfaces based on macvtap are supported by libvirt by specifying an interface type of direct. For example:
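A minimal sketch (the source device name and the mode are examples):

<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
</interface>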

The operation mode of the macvtap device can be controlled with the mode attribute. The following list shows its possible values and a description for each:

vepa: All VM Guest packets are sent to an external bridge. Packets whose destination is a VM Guest on the same VM Host Server as where the packet originates from are sent back to the VM Host Server by the VEPA capable bridge (today's bridges are typically not VEPA capable).

bridge: Packets whose destination is on the same VM Host Server as where they originate from are directly delivered to the target macvtap device. Both origin and destination devices need to be in bridge mode for direct delivery. If either one of them is in vepa mode, a VEPA capable bridge is required.

private: All packets are sent to the external bridge and will only be delivered to a target VM Guest on the same VM Host Server if they are sent through an external router or gateway and that device sends them back to the VM Host Server. This procedure is followed if either the source or destination device is in private mode.

passthrough: A special mode that gives more power to the network interface. All packets will be forwarded to the interface, allowing virtio VM Guests to change the MAC address or set promiscuous mode to bridge the interface or create VLAN interfaces on top of it. Note that a network interface is not shareable in passthrough mode. Assigning an interface to a VM Guest will disconnect it from the VM Host Server. For this reason SR-IOV virtual functions are often assigned to the VM Guest in passthrough mode.

14.16 Disabling a Memory Balloon Device # 

Memory ballooning has become a default option for KVM. The device is added to the VM Guest implicitly, so you do not need to add this element in the VM Guest's XML configuration. However, if you want to disable the memory balloon in the VM Guest for any reason, you need to set model='none' as shown below:
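<devices>
  <memballoon model='none'/>
</devices>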

14.17 Configuring Multiple Monitors (Dual Head) # 

libvirt supports a dual head configuration to display the video output of the VM Guest on multiple monitors.

Important: No Support for Xen

The Xen hypervisor does not support dual head configuration.

Procedure 14.3: Configuring Dual Head # 

While the virtual machine is running, verify that the xf86-video-qxl package is installed in the VM Guest:

> rpm -q xf86-video-qxl

Shut down the VM Guest and start editing its configuration XML as described in Section 14.1, “Editing the VM Configuration”.

Verify that the model of the virtual graphics card is 'qxl':
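<video>
  <model type='qxl' ... />
</video>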

Increase the heads parameter in the graphics card model specification from the default 1 to 2, for example:
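<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='2' primary='yes'/>
</video>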

Configure the virtual machine to use the Spice display instead of VNC:
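<graphics type='spice' autoport='yes'>
  <listen type='address'/>
</graphics>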

Start the virtual machine and connect to its display with virt-viewer, for example:

> virt-viewer --connect qemu+ssh://USER@VM_HOST/system

From the list of VMs, select the one whose configuration you have modified and confirm with Connect.

After the graphical subsystem (Xorg) loads in the VM Guest, select View › Displays › Display 2 to open a new window with the second monitor's output.

14.18 Crypto adapter pass-through to KVM guests on IBM Z # 

14.18.1 Introduction # 

IBM Z machines include cryptographic hardware with useful functions such as random number generation, digital signature generation, or encryption. KVM allows dedicating these crypto adapters to guests as pass-through devices. This means that the hypervisor cannot observe communications between the guest and the device.

14.18.2 What is covered # 

You will learn how to dedicate a crypto adapter and domains on an IBM Z host to a KVM guest. The procedure includes the following basic steps:

Mask the crypto adapter and domains from the default driver on the host.

Load the vfio-ap driver.

Assign the crypto adapter and domains to the vfio-ap driver.

Configure the guest to use the crypto adapter.

14.18.3 Requirements # 

You need to have the QEMU / libvirt virtualization environment correctly installed and functional.

The vfio_ap and vfio_mdev modules for the running kernel need to be available on the host operating system.

14.18.4 Dedicate a crypto adapter to a KVM host # 

Verify that the vfio_ap and vfio_mdev kernel modules are loaded on the host:

> lsmod | grep vfio_

If any of them is not listed, load it manually, for example:

> sudo modprobe vfio_mdev

Create a new MDEV device on the host and verify that it was added:

> uuid=$(uuidgen)
> echo ${uuid} | sudo tee /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-passthrough/create
> dmesg | tail
[...]
[272197.818811] iommu: Adding device 24f952b3-03d1-4df2-9967-0d5f7d63d5f2 to group 0
[272197.818815] vfio_mdev 24f952b3-03d1-4df2-9967-0d5f7d63d5f2: MDEV: group_id = 0

Identify the device on the host's logical partition that you intend to dedicate to a KVM guest:

> ls -l /sys/bus/ap/devices/
[...]
lrwxrwxrwx 1 root root 0 Nov 23 03:29 00.0016 -> ../../../devices/ap/card00/00.0016/
lrwxrwxrwx 1 root root 0 Nov 23 03:29 card00 -> ../../../devices/ap/card00/

In this example, it is card 0, queue 16. To match the Hardware Management Console (HMC) configuration, you need to convert the hexadecimal 16 to the decimal 22.

Mask the adapter from zcrypt use. First, check its current status:

> lszcrypt
CARD.DOMAIN TYPE  MODE        STATUS  REQUEST_CNT
-------------------------------------------------
00          CEX5C CCA-Coproc  online  5
00.0016     CEX5C CCA-Coproc  online  5

Mask the adapter:

> cat /sys/bus/ap/apmask
0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
> echo -0x0 | sudo tee /sys/bus/ap/apmask
0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff

Mask the domain:

> cat /sys/bus/ap/aqmask
0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
> echo -0x16 | sudo tee /sys/bus/ap/aqmask
0xfffffdffffffffffffffffffffffffffffffffffffffffffffffffffffffffff

Assign adapter 0 and domain 16 (22 decimal) to vfio-ap:

> echo +0x0 | sudo tee /sys/devices/vfio_ap/matrix/${uuid}/assign_adapter
> echo +0x16 | sudo tee /sys/devices/vfio_ap/matrix/${uuid}/assign_domain
> echo +0x16 | sudo tee /sys/devices/vfio_ap/matrix/${uuid}/assign_control_domain

Verify the matrix that you have configured:

> cat /sys/devices/vfio_ap/matrix/${uuid}/matrix
00.0016

Either create a new VM (refer to Chapter 8, Guest Installation) and wait until it is initialized, or use an existing VM. In both cases, make sure the VM is shut down.

Change its configuration to use the MDEV device:

> sudo virsh edit VM_NAME
[...]
<hostdev mode='subsystem' type='mdev' model='vfio-ap'>
  <source>
    <address uuid='24f952b3-03d1-4df2-9967-0d5f7d63d5f2'/>
  </source>
</hostdev>
[...]

Start the VM:

> sudo virsh start VM_NAME

Log in to the guest and verify that the adapter is present:

> lszcrypt
CARD.DOMAIN TYPE  MODE        STATUS  REQUEST_CNT
-------------------------------------------------
00          CEX5C CCA-Coproc  online  1
00.0016     CEX5C CCA-Coproc  online  1

14.18.5 Further reading # 

The installation of virtualization components is detailed in Chapter 5, Installation of Virtualization Components.

The vfio_ap architecture is detailed in https://www.kernel.org/doc/Documentation/s390/vfio-ap.txt.

A general outline together with a detailed procedure is described in https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1787405.

The architecture of VFIO Mediated devices (MDEVs) is detailed in https://www.kernel.org/doc/html/latest/driver-api/vfio-mediated-device.html.


